Natural language understanding

'''Natural language understanding''' is a subtopic of natural language processing in artificial intelligence that deals with machine reading comprehension.
Disassembling and parsing input is more complex than the reverse process of assembling output in natural language generation: the input may contain unknown and unexpected features, and the system must determine which syntactic and semantic schemes apply to it, whereas these factors are fixed in advance when generating language.
There is considerable commercial interest in the field because of its application to news gathering, text categorization, voice activation, archiving, and large-scale content analysis.
==History==
The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT, is one of the earliest known attempts at natural language understanding by a computer.〔American Association for Artificial Intelligence, ''Brief History of AI''〕〔Daniel Bobrow's PhD thesis, ''Natural Language Input for a Computer Problem Solving System''.〕〔''Machines Who Think'' by Pamela McCorduck, 2004, ISBN 1-56881-205-1, page 286〕〔Russell, Stuart J.; Norvig, Peter (2003), ''Artificial Intelligence: A Modern Approach'', Prentice Hall, ISBN 0-13-790395-2, http://aima.cs.berkeley.edu/ , p. 19〕〔''Computer Science Logo Style: Beyond Programming'' by Brian Harvey, 1997, ISBN 0-262-58150-7, page 278〕 Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled ''Natural Language Input for a Computer Problem Solving System'') showed how a computer could understand simple natural language input to solve algebra word problems.
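To give a concrete flavor of the idea, a few pattern-substitution rules can already turn a restricted English statement into an equation. The Python sketch below is a hypothetical, much-simplified reconstruction; the patterns and example sentences are invented here and do not come from Bobrow's program.

<syntaxhighlight lang="python">
import re

# A minimal, hypothetical sketch of STUDENT-style translation of restricted
# English into equations. The two rewrite patterns are invented for
# illustration and are far simpler than Bobrow's actual rules.
def to_equation(sentence: str) -> str:
    s = sentence.rstrip(".?").lower()
    s = re.sub(r"twice (\w+)", r"(2 * \1)", s)                  # "twice y" -> "(2 * y)"
    s = re.sub(r"the sum of (\w+) and (\w+)", r"(\1 + \2)", s)  # "the sum of x and y" -> "(x + y)"
    return s.replace(" is ", " = ")

print(to_equation("The sum of x and y is 10."))  # (x + y) = 10
print(to_equation("Twice x is 8."))              # (2 * x) = 8
</syntaxhighlight>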
A year later, in 1965, Joseph Weizenbaum at MIT wrote ELIZA, an interactive program that carried on a dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases, and Weizenbaum sidestepped the problem of giving the program a database of real-world knowledge or a rich lexicon. Yet ELIZA gained surprising popularity as a toy project and can be seen as a very early precursor to current commercial systems such as those used by Ask.com.〔Weizenbaum, Joseph (1976). ''Computer Power and Human Reason: From Judgment to Calculation''. W. H. Freeman and Company. ISBN 0-7167-0463-3, pages 188–189〕
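The keyword-and-canned-phrase mechanism is simple enough to sketch in a few lines of Python. The rules below are hypothetical stand-ins, not Weizenbaum's original script, and the sketch omits parts of the real program such as pronoun reflection ("my" becoming "your") and keyword ranking.

<syntaxhighlight lang="python">
import re

# ELIZA-style keyword substitution: scan for a keyword pattern and splice
# the captured text into a canned response. Patterns are illustrative only.
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the canned phrase for the first matching keyword pattern."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # content-free fallback when nothing matches

print(respond("I am worried about work"))  # How long have you been worried about work?
print(respond("I need a holiday"))         # Why do you need a holiday?
print(respond("It rained today"))          # Please go on.
</syntaxhighlight>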
In 1969 Roger Schank at Stanford University introduced the conceptual dependency theory for natural language understanding.〔Roger Schank, 1969, ''A conceptual dependency parser for natural language'' Proceedings of the 1969 conference on Computational linguistics, Sång-Säby, Sweden, pages 1-3〕 This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.
In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input.〔Woods, William A. (1970). "Transition Network Grammars for Natural Language Analysis". Communications of the ACM 13 (10): 591–606〕 Instead of ''phrase structure rules'', ATNs used an equivalent set of finite-state automata that were called recursively. ATNs and their more general format, called "generalized ATNs", continued to be used for a number of years.
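The following Python sketch shows the flavor of the mechanism with a bare recursive transition network: each network is a tiny automaton whose arcs either consume a word of a given lexical category or recursively invoke another network. It is a hypothetical, un-augmented simplification (a real ATN also carries registers, tests, and actions), and the grammar and lexicon are invented for the example.

<syntaxhighlight lang="python">
# A bare recursive transition network (an un-augmented simplification of an
# ATN). Arcs are ("cat", category, next_state) to consume one word, or
# ("push", network, next_state) to recursively traverse a sub-network.
LEXICON = {"the": "DET", "dog": "N", "cat": "N", "saw": "V", "chased": "V"}

NETWORKS = {
    "S":  {0: [("push", "NP", 1)], 1: [("push", "VP", 2)], 2: []},  # S  -> NP VP
    "NP": {0: [("cat", "DET", 1)], 1: [("cat", "N", 2)], 2: []},    # NP -> DET N
    "VP": {0: [("cat", "V", 1)], 1: [("push", "NP", 2)], 2: []},    # VP -> V (NP)
}
FINAL = {"S": {2}, "NP": {2}, "VP": {1, 2}}  # accepting states per network

def traverse(net, state, words, pos):
    """Return every input position reachable after traversing `net` from `state`."""
    results = {pos} if state in FINAL[net] else set()
    for kind, label, nxt in NETWORKS[net][state]:
        if kind == "cat" and pos < len(words) and LEXICON.get(words[pos]) == label:
            results |= traverse(net, nxt, words, pos + 1)
        elif kind == "push":  # the recursive call that replaces phrase structure rules
            for mid in traverse(label, 0, words, pos):
                results |= traverse(net, nxt, words, mid)
    return results

def accepts(sentence):
    words = sentence.lower().split()
    return len(words) in traverse("S", 0, words, 0)

print(accepts("the dog chased the cat"))  # True
print(accepts("dog the chased"))          # False
</syntaxhighlight>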
In 1971 Terry Winograd finished writing SHRDLU for his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks to direct a robotic arm to move items. The successful demonstration of SHRDLU provided significant momentum for continued research in the field.〔''Artificial intelligence: critical concepts'', Volume 1 by Ronald Chrisley, Sander Begeer 2000 ISBN 0-415-19332-X page 89〕〔Terry Winograd's SHRDLU page at Stanford (SHRDLU )〕 Winograd continued to be a major influence in the field with the publication of his book ''Language as a Cognitive Process''.〔Winograd, Terry (1983), ''Language as a Cognitive Process'', Addison–Wesley, Reading, MA.〕 At Stanford, Winograd would later be the adviser for Larry Page, who co-founded Google.
In the 1970s and 1980s the natural language processing group at SRI International continued research and development in the field. A number of commercial efforts based on the research were undertaken; ''e.g.'', in 1982 Gary Hendrix formed Symantec Corporation, originally as a company for developing a natural language interface for database queries on personal computers. However, with the advent of mouse-driven graphical user interfaces, Symantec changed direction. A number of other commercial efforts were started around the same time, ''e.g.'', Larry R. Harris at the Artificial Intelligence Corporation and Roger Schank and his students at Cognitive Systems Corp.〔Larry R. Harris, ''Research at the Artificial Intelligence Corp.'', ACM SIGART Bulletin, issue 79, January 1982〕〔''Inside Case-Based Reasoning'' by Christopher K. Riesbeck and Roger C. Schank, 1989, ISBN 0-89859-767-6, page xiii〕 In 1983, Michael Dyer developed the BORIS system at Yale, which bore similarities to the work of Roger Schank and Wendy Lehnert.〔''In Depth Understanding: A Model of Integrated Process for Narrative Comprehension''. Michael G. Dyer. MIT Press. ISBN 0-262-04073-5〕
The third millennium saw the introduction of intelligent personal assistants based on machine learning, such as Apple's Siri, Google Now, Microsoft's Cortana, and Nuance's Nina, as well as systems using machine learning for text classification, such as IBM's Watson. At the same time, the field saw a resurgence of rule-based systems, with North Side Inc. introducing technology able to clarify ambiguous or incomplete input and handle paraphrases using a deterministic, rule-based approach. North Side relies on a large number of rules to understand language more precisely, making feasible language-based video games such as Bot Colony and financial transactions through voice or text messaging.

Source: Wikipedia, the free encyclopedia (English edition); the full article on "Natural language understanding" is available at Wikipedia.